CS180 - Project 2: Fun with Filters and Frequencies!

Part 1.1: Convolutions from Scratch!

Note: What can you use for this section? This section is meant to be done with NumPy only, using simple array operations.

First, let's recap what a convolution is. Implement it with four for loops, then two for loops. Implement padding with zero fill values; convolution without padding will receive partial credit.

Convolution formula:

$$G[i,j] = \sum_{u=-k}^{k} \sum_{v=-k}^{k} H[u,v]\, F[i-u, j-v],$$

where $F$ is the image, $H$ is the kernel of size $2k+1$, and $G$ is the output image.

$$\text{kernelSize} = 2k+1 \implies k = \frac{\text{kernelSize} - 1}{2}.$$

In [37]:
import numpy as np

def convolution_four_loops(F, H):
    '''Convolve matrix F with kernel H (odd dimensions), zero-padding the borders.'''
    k_x = (H.shape[0] - 1) // 2
    k_y = (H.shape[1] - 1) // 2
    F_padded = np.pad(F, ((k_x, k_x), (k_y, k_y)), mode='constant', constant_values=0)
    G = np.empty(F.shape)
    for i in range(G.shape[0]):
        for j in range(G.shape[1]):
            acc = 0  # accumulator (avoids shadowing the built-in sum())
            for u in range(-k_x, k_x + 1):
                for v in range(-k_y, k_y + 1):
                    acc += H[u + k_x, v + k_y] * F_padded[i - u + k_x, j - v + k_y]
            G[i, j] = acc

    return G

def convolution_two_loops(F, H):
    '''Convolve matrix F with kernel H (odd dimensions), zero-padding the borders.'''
    H_flipped = np.flip(H, axis=(0, 1))  # flip both axes so the windowed sum matches true convolution
    k_x = (H.shape[0] - 1) // 2
    k_y = (H.shape[1] - 1) // 2
    F_padded = np.pad(F, ((k_x, k_x), (k_y, k_y)), mode='constant', constant_values=0)
    G = np.empty(F.shape)
    for i in range(G.shape[0]):
        for j in range(G.shape[1]):
            region = F_padded[i:i + H.shape[0], j:j + H.shape[1]]
            G[i, j] = np.sum(region * H_flipped)

    return G

Testing the convolution implementation with finite difference operators $D_x$ and $D_y$

Compare it with a built-in convolution function, scipy.signal.convolve2d! Then, take a picture of yourself (and read it as grayscale), write out a 9x9 box filter, and convolve the picture with the box filter. Do it with the finite difference operators $D_x$ and $D_y$ as well. Include the code snippets in the website!
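Here's a minimal sketch of that comparison, assuming a hypothetical selfie.jpg and the manual implementations above. Note that the manual versions assume odd kernel dimensions, so the 1x2 and 2x1 difference operators go through convolve2d only.

import numpy as np
import cv2
from scipy.signal import convolve2d

# Hypothetical filename: read the selfie as grayscale floats.
img = cv2.imread('selfie.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float64)

# 9x9 box filter, normalized so output intensities stay in range.
box = np.ones((9, 9)) / 81.0

# Finite difference operators.
D_x = np.array([[1.0, -1.0]])
D_y = np.array([[1.0], [-1.0]])

# Reference result with the same zero-fill boundary handling as the manual versions.
ref = convolve2d(img, box, mode='same', boundary='fill', fillvalue=0)
mine = convolution_two_loops(img, box)  # from the cell above
print('max abs difference vs. SciPy:', np.abs(ref - mine).max())

# The manual code assumes odd kernel sizes, so apply D_x/D_y with convolve2d.
img_dx = convolve2d(img, D_x, mode='same', boundary='fill', fillvalue=0)
img_dy = convolve2d(img, D_y, mode='same', boundary='fill', fillvalue=0)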

[Figures: the selfie convolved with the 9x9 box filter using the four-loop, two-loop, and SciPy implementations]

In my two manual implementations of convolution, boundaries were handled in the same way: the input image was padded with zeros around the borders so that convolving with the kernel yields an output of the same dimensions as the input image. This matches the mode I set for the SciPy implementation (i.e. boundary='fill', fillvalue=0). The four-loop manual approach took about 12 minutes, the two-loop version about 1 minute, and SciPy finished in about 7 seconds.
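For the record, a sketch of how such timings can be gathered (exact numbers depend on image size and hardware):

import time

def timed(fn, *args):
    '''Run fn(*args) and report the wall-clock time taken.'''
    t0 = time.perf_counter()
    out = fn(*args)
    print(f'{fn.__name__}: {time.perf_counter() - t0:.2f}s')
    return out

timed(convolution_four_loops, img, box)
timed(convolution_two_loops, img, box)
timed(lambda F, H: convolve2d(F, H, mode='same', boundary='fill', fillvalue=0), img, box)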

The outputs look pretty similar to the original. Next, let's take the difference of the filtered images with the original to see what was actually affected.

[Figure: difference between the box-filtered output and the original selfie]

Upon closer inspection, it is clear that the box filter removed high-frequency changes, most notably details in the hair. Zoom in and you'll notice it's a bit more blurry across that region. Further, all 3 implementations, despite the runtime differences, appear to achieve the same result (correctness looks promising; runtime could use some work!).

Next, let's convolve the selfie with the finite difference operators $D_x$ and $D_y$.

[Figure: the selfie convolved with $D_x$ and $D_y$]

Part 1.2: Finite Difference Operator

First, show the partial derivative in x and y of the cameraman image by convolving the image with the finite difference operators $D_x$ and $D_y$.
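A sketch of the two convolutions, assuming a hypothetical cameraman.png read as grayscale:

import numpy as np
import cv2
from scipy.signal import convolve2d

img = cv2.imread('cameraman.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)

D_x = np.array([[1.0, -1.0]])
D_y = np.array([[1.0], [-1.0]])

gx = convolve2d(img, D_x, mode='same', boundary='fill', fillvalue=0)  # partial derivative in x
gy = convolve2d(img, D_y, mode='same', boundary='fill', fillvalue=0)  # partial derivative in y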

[Figure: partial derivatives of the cameraman image in x and y]

Now compute and show the gradient magnitude image.

[Figure: gradient magnitude of the cameraman image]

To turn this into an edge image, let's binarize the gradient magnitude image by picking an appropriate threshold (trying to suppress the noise while showing all the real edges; it takes a few tries to find the right threshold, and it is meant to be assessed qualitatively).
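Continuing from the sketch above (gx and gy are the two partial derivatives), the magnitude and binarization take a couple of lines:

import numpy as np

# Gradient magnitude: Euclidean norm of the per-pixel gradient vector.
grad_mag = np.sqrt(gx**2 + gy**2)

# Binarize: everything above the threshold becomes white, the rest black.
edges = np.where(grad_mag > 100, 255, 0).astype(np.uint8)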

[Figure: binarized edge image at threshold 100]

The threshold above which values were ceiled to 255 (and below which they were floored to 0) was set to 100 out of 255. At this value, there is minimal "salt" around the image (i.e. stray white pixels), yet the outline of the man and the camera is still primarily intact. Just a bit higher a threshold and too much was floored (nearly black image), and a bit lower lets a lot of noise through. A threshold of 100 was the sweet spot.

Part 1.3: Derivative of Gaussian (DoG) Filter

We noted that the results with just the difference operator were rather noisy. Luckily, we have a smoothing operator handy: the Gaussian filter G. Create a blurred version of the original image by convolving with a Gaussian and repeat the procedure from the previous part (one way to create a 2D Gaussian filter is to use cv2.getGaussianKernel() to create a 1D Gaussian and then take an outer product with its transpose to get a 2D Gaussian kernel).
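A sketch of the kernel construction and blur (ksize and sigma are assumed values to tune):

import cv2
import numpy as np
from scipy.signal import convolve2d

ksize, sigma = 9, 1.5  # assumed values; tune per image
g1d = cv2.getGaussianKernel(ksize, sigma)  # (ksize, 1) column vector
G = g1d @ g1d.T                            # outer product -> 2D Gaussian

blurred = convolve2d(img, G, mode='same', boundary='fill', fillvalue=0)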

[Figures: the blurred cameraman image and its binarized gradient magnitude]

Check out the difference! I had to drop the threshold used to binarize the non-blurred gradient magnitude from 100 to 60, since blurring brings the average gradient intensity down; but, as you can see, the gradient magnitude from the blurred image produces much smoother boundary lines than the version from the non-blurred image.

Now we can do the same thing with a single convolution instead of two by creating derivative of Gaussian (DoG) filters. Convolve the Gaussian with $D_x$ and $D_y$ and display the resulting DoG filters as images.
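A sketch, reusing G, D_x, and D_y from the sketches above; convolving the small kernels first folds the derivative into the Gaussian:

from scipy.signal import convolve2d

# Derivative-of-Gaussian filters: differentiate the kernel, not the image.
DoG_x = convolve2d(G, D_x)
DoG_y = convolve2d(G, D_y)

# One convolution per direction now gives the smoothed derivatives directly.
gx_dog = convolve2d(img, DoG_x, mode='same', boundary='fill', fillvalue=0)
gy_dog = convolve2d(img, DoG_y, mode='same', boundary='fill', fillvalue=0)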

Then, let's verify that we get the same result as before.

[Figures: the DoG filters visualized as images, and the resulting edge images]

And the resulting images are about the same (give or take a few pixels due to rounding differences between the two orders in which the filtering is performed), as desired.

Part 2: Fun with Frequencies!

Part 2.1: Image "Sharpening"

Pick your favorite blurry image and get ready to "sharpen" it! We will derive the unsharp masking technique. Remember our favorite Gaussian filter from class. This is a low-pass filter that retains only the low frequencies. We can subtract the blurred version from the original image to get the high frequencies of the image. An image often looks sharper if it has stronger high frequencies. So, let's add a little bit more high frequency to the image! We then combine this into a single convolution operation, called the unsharp mask filter.

  1. Extract the low frequencies: $I_{low} = I * G$, where $I$ is the input image and $G$ is the Gaussian kernel we used before to blur.
  2. Extract the high frequencies: $I_{high} = I - I_{low}$
  3. Add more of these high frequencies back to the original image: $I_{sharpened} = I + I_{high}$
    Optional: Let's add a scalar multiplier to $I_{high}$ to control how much more or less of these high frequencies we add: $I_{sharpened} = I + \alpha I_{high}$

To reduce these operations down to a single convolution, let's expand $I_{sharpened}$ to see how the expression reduces:
$I_{sharpened} = I + \alpha [I - I_{low}] = I + \alpha [I - I * G]$.
Note that $M * \delta = M$ for any matrix $M$, where $\delta$ is a square kernel that's all 0s except for a 1 at its center (the unit impulse).
$\implies I_{sharpened} = I * \delta + \alpha [I * \delta - I * G]$
$\implies I_{sharpened} = I * \delta + I * \alpha [\delta - G]$, by linearity of convolution
$\implies I_{sharpened} = I * [\delta + \alpha(\delta - G)]$.
Let $H_{unsharp} = \delta + \alpha(\delta - G)$.
$\implies I_{sharpened} = I * H_{unsharp}$, which is a single convolution, as desired.
NOTE: I'm setting $\alpha = 10$ by default because it is empirically a nice value for really noticing the sharpening effects.
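A minimal sketch of the filter construction, assuming the 2D Gaussian G built earlier:

import numpy as np
from scipy.signal import convolve2d

def unsharp_mask_filter(G, alpha=10.0):
    '''H_unsharp = delta + alpha * (delta - G) for an odd-sized Gaussian G.'''
    delta = np.zeros_like(G)
    delta[G.shape[0] // 2, G.shape[1] // 2] = 1.0  # unit impulse at the center
    return delta + alpha * (delta - G)

H_unsharp = unsharp_mask_filter(G, alpha=10.0)
sharpened = np.clip(convolve2d(img, H_unsharp, mode='same', boundary='fill', fillvalue=0), 0, 255)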

Now to test on some images...

Also, for evaluation, let's pick a sharp image, blur it, and then try to sharpen it again. Let's compare the original and the sharpened image:

[Figure: original sharp image vs. blurred-then-resharpened result]

Now we're looking extra deep fried!

Finally, let's analyze how the sharpening effect varies with our sharpness amount parameter $\alpha$:

[Figure: sharpening results across a range of $\alpha$ values]
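The sweep above can be generated with something like this (a sketch; the alpha values are assumptions, reusing unsharp_mask_filter and G from above):

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import convolve2d

alphas = [0, 1, 2, 5, 10]  # assumed sweep values
fig, axes = plt.subplots(1, len(alphas), figsize=(15, 3))
for ax, a in zip(axes, alphas):
    H = unsharp_mask_filter(G, alpha=a)
    out = np.clip(convolve2d(img, H, mode='same', boundary='fill', fillvalue=0), 0, 255)
    ax.imshow(out, cmap='gray')
    ax.set_title(f'alpha = {a}')
    ax.axis('off')
plt.show()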

Part 2.2: Hybrid Images

The goal of this part of the project is to create hybrid images using the approach described in the SIGGRAPH 2006 paper by Oliva, Torralba, and Schyns. Hybrid images are static images that change in interpretation as a function of the viewing distance. The basic idea is that high frequency tends to dominate perception when it is available, but, at a distance, only the low-frequency (smooth) part of the signal can be seen. By blending the high-frequency portion of one image with the low-frequency portion of another, you get a hybrid image that leads to different interpretations at different distances.

  1. First, we'll get a few pairs of images to make into hybrid images. Then, we will need to write code to low-pass filter one image, high-pass filter the second image, and add (or average) the two images. For a low-pass filter, Oliva et al. suggest using a standard 2D Gaussian filter. For a high-pass filter, they suggest using the impulse filter minus the Gaussian filter (which can be computed by subtracting the Gaussian-filtered image from the original). The cutoff frequency of each filter should be chosen with some experimentation (a sketch of this pipeline follows after this list).
[Figure: input pair and hybrid result]
  2. Finally, let's create 2-3 hybrid images (change of expression, morph between different objects, change over time, etc.). We'll show the input images and hybrid result per example.
[Figures: two more hybrid image examples with their inputs]
  3. For our favorite result, let's also illustrate the process through frequency analysis. We'll show the log magnitude of the Fourier transform of the two input images, the filtered images, and the hybrid image. In Python, we can compute and display the 2D Fourier transform with: plt.imshow(np.log(np.abs(np.fft.fftshift(np.fft.fft2(gray_image)))))
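Here is the sketch promised in item 1: low-pass one image, high-pass the other, and average; the sigma parameters are assumed cutoff controls, and the spectrum helper uses the exact expression from the spec.

import numpy as np
import cv2
import matplotlib.pyplot as plt
from scipy.signal import convolve2d

def gaussian2d(sigma):
    '''2D Gaussian with an odd kernel size derived from sigma.'''
    ksize = int(6 * sigma) | 1  # force an odd size
    g = cv2.getGaussianKernel(ksize, sigma)
    return g @ g.T

def hybrid(im1, im2, sigma_low, sigma_high):
    '''Low frequencies of im1 plus high frequencies of im2, averaged.'''
    low = convolve2d(im1, gaussian2d(sigma_low), mode='same', boundary='symm')
    high = im2 - convolve2d(im2, gaussian2d(sigma_high), mode='same', boundary='symm')
    return (low + high) / 2

def show_spectrum(gray_image):
    '''Log-magnitude Fourier spectrum, as given in the spec.'''
    plt.imshow(np.log(np.abs(np.fft.fftshift(np.fft.fft2(gray_image)))))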

Part 2.3: Gaussian and Laplacian Stacks

In this part we will implement Gaussian and Laplacian stacks, which are kind of like pyramids but without the downsampling. This will prepare us for the next step, multi-resolution blending.

  1. Implement a Gaussian and a Laplacian stack. The difference between a stack and a pyramid is that in each level of the pyramid the image is downsampled, so the result gets smaller and smaller. In a stack the images are never downsampled, so the results are all the same dimension as the original image, and can all be saved in one 3D matrix (if the original image was grayscale). To create the successive levels of the Gaussian stack, just apply the Gaussian filter at each level, but do not subsample. In this way we will get a stack that behaves similarly to a pyramid that was downsampled to half its size at each level. If you would rather work with pyramids, you may implement pyramids rather than stacks. However, in any case, you are NOT allowed to use built-in pyramid functions like cv2.pyrDown() or skimage.transform.pyramid_gaussian() in this project. You must implement your stacks from scratch! (A minimal stack sketch follows after the figures below.)
[Figures: Gaussian and Laplacian stacks of the input images]
  2. Apply our Gaussian and Laplacian stacks to the oraple and recreate the outcomes of Figure 3.42 in Szeliski (Ed. 2), page 167, as you can see in the image above. Review the 1983 paper for more information.
[Figures: recreated oraple blending results]
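Here is the minimal stack sketch mentioned in item 1, reusing the gaussian2d helper from the hybrid-image sketch (levels and sigma are assumed parameters):

import numpy as np
from scipy.signal import convolve2d

def gaussian_stack(img, levels, sigma=2.0):
    '''Repeatedly blur WITHOUT downsampling; every level keeps the input shape.'''
    G = gaussian2d(sigma)  # from the hybrid-image sketch above
    stack = [img.astype(np.float64)]
    for _ in range(levels - 1):
        stack.append(convolve2d(stack[-1], G, mode='same', boundary='symm'))
    return stack

def laplacian_stack(img, levels, sigma=2.0):
    '''Each level is the band of detail lost between consecutive Gaussian levels.'''
    gs = gaussian_stack(img, levels, sigma)
    return [gs[i] - gs[i + 1] for i in range(levels - 1)] + [gs[-1]]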

Part 2.4: Multiresolution Blending (a.k.a. the oraple!)

  1. First, we'll need to get a few pairs of images that we want to blend together with a vertical or horizontal seam. Then we will need to write some code to use our Gaussian and Laplacian stacks from the previous part to blend the images together. Since we are using stacks instead of pyramids like in the paper, the algorithm described on page 226 will not work as-is. If we try it out, we end up with a very clear seam between the apple and the orange, since in the pyramid case the downsampling/blurring/upsampling hoopla ends up blurring the abrupt seam proposed in that algorithm. Instead, we should always use a mask as proposed in the algorithm on page 230, and remember to create a Gaussian stack for the mask image as well as for the two input images. The Gaussian blurring of the mask smooths out the transition between the two images. For the vertical or horizontal seam, our mask will simply be a step function of the same size as the original images (a minimal blending sketch follows at the end of this section).
[Figure: the oraple]

We sharpened the result because it looked like some of the higher frequencies were diminished along the way. Looks great!

  2. Now that we've made ourselves an oraple (a.k.a. the vertical or horizontal seam is nicely working), let's pick two pairs of images to blend together with an irregular mask, as demonstrated in figure 8 of the paper.
[Figures: two blends using irregular masks, with their input images and masks]
  3. Finally, let's illustrate the process by applying our Laplacian stack and displaying it for our favorite result, along with the masked input images that created it. This should look similar to Figure 10 in the paper.
[Figures: Laplacian stack visualization of the favorite blend and its masked inputs]
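And the blending sketch promised in item 1: sum the Laplacian levels of the two images, weighted by the Gaussian stack of the mask (uses the stack helpers from Part 2.3; the step-function mask and the apple/orange names are assumed examples):

import numpy as np

def blend(im1, im2, mask, levels=6, sigma=2.0):
    '''Multiresolution blend: combine each Laplacian level under the blurred mask.'''
    L1 = laplacian_stack(im1, levels, sigma)
    L2 = laplacian_stack(im2, levels, sigma)
    M = gaussian_stack(mask.astype(np.float64), levels, sigma)
    out = np.zeros_like(L1[0])
    for l1, l2, m in zip(L1, L2, M):
        out += m * l1 + (1 - m) * l2
    return out

# Vertical-seam step mask: 1s on the left half, 0s on the right.
# mask = np.zeros(apple.shape)
# mask[:, : apple.shape[1] // 2] = 1.0
# oraple = blend(apple, orange, mask)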